Verifying And Interpreting Neural Networks using Finite Automata
Verifying properties and interpreting the behaviour of deep neural networks
(DNNs) is an important task given their ubiquitous use in applications,
including safety-critical ones, and their black-box nature. We propose an
automata-theoretic approach to tackling problems arising in DNN analysis. We show
that the input-output behaviour of a DNN can be captured precisely by a
(special) weak Büchi automaton of exponential size. We show how such automata
can be used to address common verification and interpretation tasks, such as
adversarial robustness and minimum sufficient reasons. We report on a
proof-of-concept implementation that translates DNNs to automata over finite
words for better efficiency, at the cost of some precision in the analysis.
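To make the idea of capturing a network's input-output behaviour as an automaton over finite words concrete, here is a minimal, hypothetical sketch (not the paper's construction): a single ReLU neuron over binary inputs, whose weights `W` and bias `B` are invented for illustration. Each input vector is read as a word over {0, 1}, and a small DFA, whose states track the position and the partial weighted sum, accepts exactly the words on which the neuron's output is positive.

```python
from itertools import product

# Hypothetical toy network: one neuron f(x) = ReLU(W·x + B) over
# binary inputs x in {0,1}^n. W and B are assumed example values.
W = [2, -1, 1]
B = -1

def build_dfa(w, b):
    """Build a DFA over {0,1} accepting exactly the inputs with f(x) > 0.

    States are pairs (position, partial weighted sum); only reachable
    states are constructed, so the automaton stays small here.
    """
    n = len(w)
    start = (0, 0)
    trans = {}
    states = {start}
    frontier = [start]
    while frontier:
        pos, s = frontier.pop()
        if pos == n:
            continue  # word fully read; no outgoing transitions
        for bit in (0, 1):
            nxt = (pos + 1, s + w[pos] * bit)
            trans[((pos, s), bit)] = nxt
            if nxt not in states:
                states.add(nxt)
                frontier.append(nxt)
    # Accept iff the completed sum plus bias is positive (ReLU output > 0).
    accepting = {(pos, s) for (pos, s) in states if pos == n and s + b > 0}
    return start, trans, accepting

def accepts(dfa, word):
    start, trans, accepting = dfa
    state = start
    for bit in word:
        state = trans[(state, bit)]
    return state in accepting

dfa = build_dfa(W, B)

# Sanity check: the automaton agrees with the network on every input.
for x in product((0, 1), repeat=len(W)):
    net_out = max(0, sum(wi * xi for wi, xi in zip(W, x)) + B)
    assert accepts(dfa, x) == (net_out > 0)
```

This only handles a thresholded, binarized toy case; the paper's weak Büchi construction works over infinite words and captures the exact real-valued behaviour, which this finite-word sketch deliberately trades away, mirroring the precision loss mentioned above.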